Hamilton–Jacobi–Bellman equation

The Hamilton–Jacobi–Bellman (HJB) equation is a partial differential equation which is central to optimal control theory. The solution of the HJB equation is the 'value function' which gives the minimum cost for a given dynamical system with an associated cost function.
When solved locally, the HJB equation is a necessary condition; when solved over the whole of the state space, it is a necessary and sufficient condition for an optimum. The solution is open loop, but it also permits the solution of the closed loop problem. The HJB method can be generalized to stochastic systems as well.
Classical variational problems, for example the brachistochrone problem, can be solved using this method.
The equation is a result of the theory of dynamic programming which was pioneered in the 1950s by Richard Bellman and coworkers. The corresponding discrete-time equation is usually referred to as the Bellman equation. In continuous time, the result can be seen as an extension of earlier work in classical physics on the Hamilton–Jacobi equation by William Rowan Hamilton and Carl Gustav Jacob Jacobi.
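As a minimal sketch of the discrete-time counterpart mentioned above (not taken from the article), the following value iteration solves the Bellman equation on a small finite problem; the state space, stage costs, transition table, and discount factor are illustrative assumptions.

import numpy as np

n_states, n_actions = 5, 2
rng = np.random.default_rng(0)
cost = rng.uniform(0.0, 1.0, size=(n_states, n_actions))            # stage cost C(x, u)  (assumed)
next_state = rng.integers(0, n_states, size=(n_states, n_actions))  # deterministic dynamics f(x, u)  (assumed)
gamma = 0.95                                                         # discount factor  (assumed)

V = np.zeros(n_states)
for _ in range(500):
    # Bellman update: V(x) = min_u [ C(x,u) + gamma * V(f(x,u)) ]
    Q = cost + gamma * V[next_state]
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmin(axis=1)   # greedy (closed-loop) control recovered from the value function
print("value function:", V)
print("optimal policy:", policy)

The value function returned here plays the same role as the one solved for by the HJB equation in continuous time: it gives the minimum cost-to-go from each state, and the optimal control is the minimizer of the right-hand side.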
==Optimal control problems==

Consider the following problem in deterministic optimal control over the time period [0, ''T'']:
:V(x(0), 0) = \min_u \left\{ \int_0^T C[x(t),u(t)] \, dt + D[x(T)] \right\}
where ''C''[·] is the scalar cost rate function and ''D''[·] is a function that gives the economic value or utility at the final state, ''x''(''t'') is the system state vector, ''x''(0) is assumed given, and ''u''(''t'') for 0 ≤ ''t'' ≤ ''T'' is the control vector that we are trying to find.
The system must also be subject to
: \dot{x}(t) = F[x(t), u(t)] \,
where ''F''[·] gives the vector determining the physical evolution of the state vector over time.
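To make this setup concrete, here is a minimal numerical sketch (not taken from the article) that approximates the value function V(x, t) by backward dynamic programming on a grid; the scalar dynamics F[x,u] = u, the quadratic costs C[x,u] = x² + u² and D[x] = x², and the grids and time step are all illustrative assumptions.

import numpy as np

T, dt = 1.0, 0.01
xs = np.linspace(-2.0, 2.0, 201)     # state grid  (assumed)
us = np.linspace(-2.0, 2.0, 41)      # control grid  (assumed)
n_steps = int(round(T / dt))

V = xs**2                            # terminal condition V(x, T) = D[x] = x^2
for _ in range(n_steps):             # march backwards in time from t = T to t = 0
    # candidate next states x + F[x,u] dt for every (x, u) pair
    x_next = xs[:, None] + us[None, :] * dt
    stage = (xs[:, None]**2 + us[None, :]**2) * dt   # C[x,u] dt
    V_next = np.interp(x_next, xs, V)                # interpolate V at the next states
    V = (stage + V_next).min(axis=1)                 # minimise over the control u

print("approximate V(x=0, t=0):", V[np.searchsorted(xs, 0.0)])

The backward sweep mirrors the structure of the continuous-time problem: at each time step the cost rate C[·] is accumulated and minimized over the control, subject to the state evolving according to F[·].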

Excerpt source: the free encyclopedia Wikipedia.
Read the full article "Hamilton–Jacobi–Bellman equation" on Wikipedia.